Solving the "Isomorphism of Polynomials with Two Secrets" Problem for all Pairs of Quadratic Forms
We study the Isomorphism of Polynomials with Two Secrets (IP2S) problem with m=2 homogeneous
quadratic polynomials in n variables over a finite field of odd characteristic:
given two quadratic polynomials (a, b) in n variables, find two bijective
linear maps (s, t) such that b = t ∘ a ∘ s. We give an algorithm computing s and t
in time complexity O~(n^4) for all instances, and O~(n^3) for a dominant set of
instances.
The IP2S problem was introduced in cryptography by Patarin back in 1996. The
special case of this problem when t is the identity is called the isomorphism
with one secret (IP1S) problem. Generic algebraic equation solvers (for example
using Gröbner bases) solve random instances of the IP1S problem quite well.
For the particular cyclic instances of IP1S, a cubic-time algorithm was later
given and explained in terms of pencils of quadratic forms over all finite
fields; in particular, the cyclic IP1S problem in odd characteristic reduces to
the computation of the square root of a matrix.
We give here an algorithm solving all cases of the IP1S problem in odd
characteristic using two new tools, the Kronecker form for a singular quadratic
pencil, and the reduction of bilinear forms over a non-commutative algebra.
Finally, we show that the second secret in the IP2S problem may be recovered in
cubic time.
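The cyclic IP1S reduction above ends in a matrix square-root computation. As a rough illustration only (the paper works over finite fields of odd characteristic, whereas this toy sketch runs over the reals, where the same Newton iteration applies), here is a square root of a 2×2 matrix; all function names are ours, not the paper's:

```python
def mat_mul(A, B):
    # 2x2 matrix product
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_inv(A):
    # 2x2 inverse via the adjugate; assumes det != 0
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[ A[1][1]/det, -A[0][1]/det],
            [-A[1][0]/det,  A[0][0]/det]]

def mat_sqrt(A, iters=50):
    # Newton iteration X <- (X + A X^{-1}) / 2, started at X = A.
    # Converges when A has no eigenvalues on the closed negative real axis.
    X = [row[:] for row in A]
    for _ in range(iters):
        Y = mat_mul(A, mat_inv(X))
        X = [[(X[i][j] + Y[i][j]) / 2 for j in range(2)] for i in range(2)]
    return X

# mat_sqrt([[4.0, 0.0], [0.0, 9.0]])  ->  approximately [[2, 0], [0, 3]]
```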
Statistical Properties of Short RSA Distribution and Their Cryptographic Applications
In this paper, we study some computational security assumptions involved in two cryptographic applications related to the RSA cryptosystem. To this end, we use exponential sums to bound the statistical distances between these distributions and the uniform distribution. We are interested in studying the k least (or most) significant bits of x^e mod N, where N is an RSA modulus and x is restricted to a small part of [0, N). First of all, we provide the first rigorous evidence that the cryptographic pseudo-random generator proposed by Micali and Schnorr is based on firm foundations. This proof is missing in the original paper and does not cover the parameters chosen by the authors. Consequently, we extend the proof to get a new result closer to these parameters, using a recent work of Wooley on exponential sums, and we show some limitations of our technique. Finally, we look at the semantic security of the RSA padding scheme PKCS#1 v1.5, which is still widely used in practice. We show that parts of the ciphertexts are indistinguishable from uniform bitstrings.
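The Micali-Schnorr generator analyzed above iterates x → x^e mod N, emitting low-order bits as output and keeping high-order bits as the next state. The following is a minimal sketch of that iteration under one common description of the scheme; the toy modulus and the particular state/output split are our (insecure) illustration, not the paper's parameter choice:

```python
def micali_schnorr(N, e, seed, steps, out_bits):
    # One common description of the Micali-Schnorr PRG: iterate y = x^e mod N,
    # emit the out_bits least significant bits of y as output, and keep the
    # remaining high-order bits of y as the next internal state x.
    x = seed
    mask = (1 << out_bits) - 1
    stream = []
    for _ in range(steps):
        y = pow(x, e, N)
        stream.append(y & mask)    # low bits -> pseudo-random output
        x = y >> out_bits          # high bits -> next state
    return stream

# Toy parameters only: N = 61 * 53 = 3233, e = 17 with gcd(e, phi(N)) = 1.
# micali_schnorr(3233, 17, 7, 4, 8) emits four 8-bit outputs.
```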
Computing e-th roots in number fields
We describe several algorithms for computing e-th roots of elements in a
number field K, where e is an odd prime-power integer. In particular we
generalize Couveignes' and Thomé's algorithms, originally designed to compute
square roots in the Number Field Sieve algorithm for integer factorization. Our
algorithms cover most cases of e and K and allow us to obtain reasonable
timings even for large-degree number fields and large exponents e. The
complexity of our algorithms is better than that of general root-finding
algorithms, and our implementation compares well in performance to these
algorithms as implemented in well-known computer algebra software. One
important application of our algorithms is to compute the saturation phase in
the Twisted-PHS algorithm for solving the Ideal-SVP problem over cyclotomic
fields in post-quantum cryptography.
Comment: 9 pages, 4 figures. Associated experimental code provided at
https://github.com/ob3rnard/eth-root
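As a point of contrast with the number-field setting above, recall the easy prime-field case: when gcd(e, p−1) = 1, raising to the e-th power is a bijection on (Z/pZ)*, and the e-th root is recovered by a single exponentiation. This sketch is our illustration of that baseline, not one of the paper's algorithms:

```python
def eth_root_mod_p(a, e, p):
    # Valid when gcd(e, p - 1) == 1: the map x -> x^e is then a bijection
    # on (Z/pZ)*, inverted by raising to d = e^{-1} mod (p - 1).
    d = pow(e, -1, p - 1)  # modular inverse (Python 3.8+)
    return pow(a, d, p)

# Example: the unique 7th root of 52 modulo 101 is 5,
# since 5^7 = 78125 = 52 (mod 101).
```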
Cryptanalysis of the New Multilinear Map over the Integers
This article describes a polynomial-time attack on the new multilinear map over the integers presented by Coron, Lepoint and Tibouchi at CRYPTO 2015 (CLT15). This version is a fix of the first multilinear map over the integers, presented by the same authors at CRYPTO 2013 (CLT13) and broken by Cheon et al. at EUROCRYPT 2015. The attack essentially downgrades CLT15 to its original version CLT13, and leads to a full break of the multilinear map for virtually all applications. In addition to the main attack, we present an alternate probabilistic attack underpinned by a different technique, as well as an instant-time attack on the optimized variant of the scheme.
Comparison between Subfield and Straightforward Attacks on NTRU
Recently, in two independent papers, Albrecht, Bai and Ducas, and Cheon, Jeong and Lee presented two
very similar attacks that allow one to break NTRU with larger parameters and the GGH multilinear map without
zero encodings. They proposed an algorithm for recovering the NTRU secret key given the public key
which applies for large NTRU moduli, in particular to Fully Homomorphic Encryption schemes based on
NTRU. Fortunately, these attacks do not endanger the security of the NTRUEncrypt scheme, but they shed new
light on the hardness of this problem. The basic idea of both attacks relies on decreasing the dimension
of the NTRU lattice by using the matrix of multiplication by the norm (resp. trace) of the public key in some
subfield, instead of the public key itself. Since the degree of the subfield is smaller, the dimension of
the lattice decreases, and lattice reduction algorithms perform better.
Here, we revisit the attacks on NTRU and propose another variant that is simpler and outperforms both
of these attacks in practice. It allows us to break several concrete instances of YASHE, an NTRU-based FHE
scheme, but it is not as efficient as the hybrid method of Howgrave-Graham on concrete parameters of
NTRU. Instead of using the norm and trace, we propose to use multiplication by the public key in
some subring, and we show that this choice leads to better attacks. We can then show that for power-of-two
cyclotomic fields, the time complexity is polynomial. Finally, we show that, under
heuristics, straightforward lattice reduction is even more efficient, allowing us to extend this result to fields
without non-trivial subfields, such as NTRU Prime. We stress that the improvement in the analysis applies
even for relatively small moduli; though if the secret is sparse, it may not be the fastest attack. We also
derive a tight estimation of security for the (Ring-)LWE and NTRU assumptions.
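The subfield idea above can be made concrete for power-of-two cyclotomics K = Q[x]/(x^n + 1): the nontrivial automorphism fixing the index-2 subfield sends x to −x, so the relative norm of h is h(x)·h(−x), which has only even-degree coefficients and hence lives in the subfield of half degree. A small sketch over integer coefficient vectors (our own toy code, not the authors' implementation):

```python
def poly_mul_mod(a, b, n):
    # Multiply in Z[x]/(x^n + 1): the reduction x^n = -1 wraps
    # high-degree terms back with a sign flip.
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] += ai * bj
            else:
                res[k - n] -= ai * bj
    return res

def conj(h):
    # h(-x): negate the odd-degree coefficients
    return [(-c if i % 2 else c) for i, c in enumerate(h)]

def relative_norm(h, n):
    # N(h) = h(x) * h(-x) lies in the subfield Q[y]/(y^{n/2} + 1)
    # with y = x^2: all odd-degree coefficients vanish.
    nh = poly_mul_mod(h, conj(h), n)
    assert all(nh[i] == 0 for i in range(1, n, 2))
    return nh[0::2]   # coefficients in the subfield variable y = x^2

# Example: for n = 4 and h = 1 + 2x + 3x^2 + 4x^3,
# the relative norm is 8 + 18y, a degree-2 subfield element.
```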
Time-Memory Trade-Off for Lattice Enumeration in a Ball
Enumeration algorithms in lattices are a well-known technique for solving the Shortest Vector Problem (SVP) and improving
blockwise lattice reduction algorithms.
Here, we propose a new algorithm for enumerating the lattice points in a ball whose radius is given as a
multiple of the length of the shortest vector of the lattice. Then, we show how
this method can be used for solving SVP and the Closest Vector Problem (CVP)
with an approximation factor in an n-dimensional lattice.
Previous enumeration algorithms take super-exponential running time with polynomial memory; for instance,
Kannan's algorithm runs in time n^{O(n)}. Ours, in contrast, requires exponential memory, and we propose different time/memory trade-offs.
Recently, Aggarwal, Dadush, Regev and Stephens-Davidowitz described a randomized 2^{n+o(n)}-time
algorithm at STOC '15 for solving SVP, and approximation versions of SVP and CVP at FOCS '15.
However, it is not possible to use a
time/memory trade-off for their algorithms. Their main result is an algorithm that samples an exponential
number of random vectors from a discrete Gaussian distribution with width below the smoothing parameter of the lattice.
Our algorithm is related to the hill-climbing algorithm of Liu, Lyubashevsky and Micciancio from
RANDOM '06 for solving the bounded distance decoding problem with preprocessing. It was later improved by Dadush,
Regev and Stephens-Davidowitz for solving the CVP with preprocessing problem at CCC '14. However, the latter algorithm only looks for
one lattice vector, while we show that we can enumerate all lattice vectors in a ball. Finally, these papers use a
preprocessing step to obtain a succinct representation of some lattice function. We show as a first step that we
can obtain the same information using an exponential-time algorithm based on a collision-search algorithm similar
to the reduction of Micciancio and Peikert for the SIS problem with small modulus at CRYPTO '13.
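To make the object being computed concrete: enumerating lattice points in a ball means listing every integer combination of the basis vectors whose Euclidean norm is at most the radius. The brute-force sketch below is only our illustration of that input/output behavior; real enumeration algorithms prune the coefficient ranges coordinate by coordinate using the Gram-Schmidt orthogonalization instead of scanning a fixed box:

```python
import itertools
import math

def enumerate_ball(basis, radius, box=5):
    # Naive enumeration: scan all coefficient vectors in [-box, box]^dim and
    # keep the lattice vectors of Euclidean norm <= radius.
    dim = len(basis)
    points = []
    for coeffs in itertools.product(range(-box, box + 1), repeat=dim):
        v = tuple(sum(c * basis[i][j] for i, c in enumerate(coeffs))
                  for j in range(dim))
        if math.hypot(*v) <= radius:
            points.append(v)
    return points

# Example: in the lattice spanned by (2, 0) and (0, 3), the ball of
# radius 3.5 contains exactly (0, 0), (+-2, 0) and (0, +-3).
```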
Achieving Better Privacy for the 3GPP AKA Protocol
Proposed by the 3rd Generation Partnership Project (3GPP) as a standard for 3G and 4G mobile-network communications, the AKA protocol is meant to provide a mutually-authenticated key exchange between clients and associated network servers. As a result, AKA must guarantee the indistinguishability from random of the session keys (key-indistinguishability), as well as client- and server-impersonation resistance. A paramount requirement is also that of client privacy, which 3GPP defines in terms of: user identity confidentiality, service untraceability, and location untraceability. Moreover, since servers are sometimes untrusted (in the case of roaming), the AKA protocol must also protect clients with respect to these third parties. Following the description of client-tracking attacks, e.g. by using error messages or IMSI catchers, van den Broek et al. and, respectively, Arapinis et al. each proposed a new variant of AKA addressing such problems. In this paper we use the approach of provable security to show that these variants still fail to guarantee the privacy of mobile clients. We propose an improvement of AKA which retains most of its structure and respects practical necessities such as key management, but which provably attains security with respect to servers and Man-in-the-Middle (MiM) adversaries. Moreover, it is impossible to link client sessions in the absence of client corruptions. Finally, we prove that any variant of AKA retaining its mutual-authentication specificities cannot achieve client-unlinkability in the presence of corruptions. In this sense, our proposed variant is optimal.
Efficient and Provable White-Box Primitives
In recent years there have been several attempts to build white-box block ciphers whose implementation aims to be incompressible. This includes the weak white-box ASASA construction by Bouillaguet, Biryukov and Khovratovich from Asiacrypt 2014, and the recent space-hard construction by Bogdanov and Isobe at CCS 2016. In this article we propose the first constructions aiming at the same goal while offering provable security guarantees. Moreover, we propose concrete instantiations of our constructions, which prove to be quite efficient and competitive with prior work. Thus, provable security comes with a surprisingly low overhead.